[18.0][IMP] auditlog_clickhouse: add read via pg_clickhouse#3
Conversation
Task: 5246
Walkthrough: Adds a new Odoo module auditlog_clickhouse: buffered async export of audit logs to ClickHouse, client factory, FDW read-only integration, UI/config, cron/queue jobs, tests, and documentation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed from 6ecdac5 to bd4cbcc.
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@auditlog_clickhouse/__manifest__.py`:
- Line 21: Remove the redundant "installable": True entry from the manifest
dictionary in __manifest__.py; locate the dictionary that contains the
"installable" key and delete that key/value pair so the manifest relies on the
default (True) behavior.
In `@auditlog_clickhouse/models/auditlog_clickhouse_config.py`:
- Around line 79-84: Add a strict positive-value validation for the
queue_batch_size field: in the model class that defines queue_batch_size (e.g.,
AuditlogClickhouseConfig), add an `@api.constrains`('queue_batch_size') method
(e.g., _check_queue_batch_size) that raises a ValidationError when
self.queue_batch_size is less than or equal to 0; this ensures create/write
operations and UI forms cannot set zero or negative values and prevents invalid
batch sizes from reaching flush jobs or SQL LIMITs.
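The constraint logic this comment asks for can be sketched framework-free. The function below mirrors the suggested `_check_queue_batch_size`; in the real module it would be an `@api.constrains("queue_batch_size")` method on the config model raising `ValidationError` rather than `ValueError` (names taken from the review text, not verified against the code):

```python
def check_queue_batch_size(value: int) -> None:
    """Reject zero or negative batch sizes before they reach flush jobs.

    Framework-free sketch of the suggested ``_check_queue_batch_size``
    constraint; the Odoo version would raise ``ValidationError`` instead.
    """
    if value <= 0:
        raise ValueError("queue_batch_size must be strictly positive")
```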
- Around line 184-228: The create and write paths can activate multiple records
in one ORM call so calling active_records._deactivate_other_configs() (or
activated._deactivate_other_configs()) lets multiple active configs persist
because the helper excludes all ids on the same recordset; fix by reducing the
caller to a single canonical active record before deactivation: when
len(active_records) or len(activated) > 1 select one representative (e.g. the
first/singleton slice of active_records or activated) and call that
single-record ._deactivate_other_configs() so all other configs (including other
newly-created/updated records) are deactivated and the single-active invariant
is preserved (apply same change in both create and write).
- Around line 357-402: The generated CREATE TABLE DDL interpolates db_name raw
(db_name) into the SQL for the tables auditlog_log and auditlog_log_line; fix by
quoting identifiers and escaping any backticks in db_name before interpolation
(e.g. produce `db_name`.`auditlog_log` and `db_name`.`auditlog_log_line`), or
use a small helper to quote_identifier(name) and apply it to db_name and the
table names when building the f-strings so special characters and reserved words
are safe.
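The small helper the comment suggests could look like this. The name `quote_identifier` comes from the comment itself; doubling embedded backticks follows ClickHouse's identifier-escaping convention:

```python
def quote_identifier(name: str) -> str:
    """Backtick-quote a ClickHouse identifier, doubling embedded backticks."""
    return "`" + (name or "").replace("`", "``") + "`"


# Hypothetical use when building the DDL f-strings:
db = quote_identifier("my-db")            # safe even with a hyphen
table = quote_identifier("auditlog_log")
ddl = f"CREATE TABLE IF NOT EXISTS {db}.{table} (...)"
```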
In `@auditlog_clickhouse/models/auditlog_log_buffer.py`:
- Around line 307-333: The current try/except around client.execute for the two
INSERTs can cause duplicate audit rows when the first INSERT (auditlog_log)
succeeds and the second (auditlog_log_line) fails, because RetryableJobError
simply retries the whole batch; modify the write path in the method containing
the two client.execute calls so that inserts are idempotent before retrying: add
a deterministic deduplication token or unique id to each log row (e.g., add a
dedup_id column to rows created for the self._CH_LOG_COLUMNS and
self._CH_LINE_COLUMNS), persist that token with the rows on insert, and change
the insert logic to either (a) use an upsert/INSERT ... SELECT that filters out
dedup_ids already present, or (b) record per-buffer success state in a small
durable state table and only resend rows for buffers that previously failed;
update the exception handling to only raise RetryableJobError for the subset of
rows that actually failed, referencing the same dedup_id values so retries are
safe (affecting the code around client.execute, the construction of
log_rows/line_rows, and the RetryableJobError raise).
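One way to derive the deterministic deduplication token the comment proposes is a name-based UUID over fields that identify a single log row, so a retried batch reproduces identical ids and already-inserted rows can be filtered out. The field choice here is purely illustrative:

```python
import uuid


def dedup_id(buffer_id: int, model: str, res_id: int, ts: str) -> str:
    """Deterministic dedup token for one audit row.

    uuid5 is name-based, so the same inputs always yield the same token,
    which makes retried inserts idempotent. The fields used here are an
    illustrative choice, not the module's actual schema.
    """
    raw = f"{buffer_id}:{model}:{res_id}:{ts}"
    return str(uuid.uuid5(uuid.NAMESPACE_OID, raw))
```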
- Around line 309-317: The INSERT statements interpolate config.database and
unquoted table names; wrap database and table identifiers in ClickHouse
identifier quotes and sanitize the user-supplied database name to avoid
injection/invalid identifiers. Update the two client.execute calls that build
the INSERTs (the ones using self._CH_LOG_COLUMNS and self._CH_LINE_COLUMNS) to
use quoted identifiers like `\`{sanitized_db}\`.\`auditlog_log\`` and
`\`{sanitized_db}\`.\`auditlog_log_line\``, where sanitized_db =
config.database.replace("`", "``") (or an equivalent escape) so any backticks
are escaped before formatting. Ensure the rest of the query (column list and
VALUES) is left unchanged and only the database/table identifiers are changed.
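A sketch of assembling the INSERT prefix with quoted identifiers as described. Function and argument names are hypothetical; per the comment, only the database/table identifiers change and the column list stays as-is:

```python
def build_insert_sql(database: str, table: str, columns: list[str]) -> str:
    """Build an INSERT prefix with backtick-quoted db/table identifiers.

    Only identifiers are quoted (with embedded backticks escaped); the
    column list is left unchanged, matching the review's suggestion.
    """
    def q(ident: str) -> str:
        return "`" + ident.replace("`", "``") + "`"

    return f"INSERT INTO {q(database)}.{q(table)} ({', '.join(columns)}) VALUES"
```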
In `@auditlog_clickhouse/readme/CONFIGURE.md`:
- Around line 31-32: Add a missing configuration step instructing users to click
the form button labeled "Setup FDW read" (backend action: action_setup_fdw_read)
after creating the audit tables; update CONFIGURE.md right after the "Create
Auditlog Tables" step to tell users to click "Setup FDW read" to configure the
pg_clickhouse FDW read path (ensure pg_clickhouse is available/enabled before
running this action).
In `@auditlog_clickhouse/readme/DESCRIPTION.md`:
- Line 1: Update the module description text to fix the grammar: replace
"transfers audit of logs" with "transfers of audit logs" so the sentence reads
"This module implements buffered asynchronous transfers of audit logs from
PostgreSQL to ClickHouse." Locate and edit the DESCRIPTION.md content (the
module description string) to use the corrected phrasing.
In `@auditlog_clickhouse/views/auditlog_clickhouse_config_views.xml`:
- Around line 71-78: The button labeled "Configure action"
(name="%(action_configure_auditlog_clickhouse_flush_cron)d") is misleading
because it opens the flush cron form
(ir_cron_auditlog_clickhouse_enqueue_flush); update the button's string to a
clearer label such as "Configure flush cron" so the UI accurately reflects the
target action (action_configure_auditlog_clickhouse_flush_cron ->
ir_cron_auditlog_clickhouse_enqueue_flush).
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
auditlog_clickhouse/security/ir.model.access.csv is excluded by !**/*.csv
📒 Files selected for processing (23)
- auditlog_clickhouse/README.rst
- auditlog_clickhouse/__init__.py
- auditlog_clickhouse/__manifest__.py
- auditlog_clickhouse/data/auditlog_clickhouse_queue.xml
- auditlog_clickhouse/i18n/auditlog_clickhouse.pot
- auditlog_clickhouse/models/__init__.py
- auditlog_clickhouse/models/auditlog_clickhouse_config.py
- auditlog_clickhouse/models/auditlog_log_buffer.py
- auditlog_clickhouse/models/auditlog_readonly.py
- auditlog_clickhouse/models/auditlog_rule.py
- auditlog_clickhouse/models/clickhouse_client.py
- auditlog_clickhouse/pyproject.toml
- auditlog_clickhouse/readme/CONFIGURE.md
- auditlog_clickhouse/readme/CONTEXT.md
- auditlog_clickhouse/readme/DESCRIPTION.md
- auditlog_clickhouse/readme/USAGE.md
- auditlog_clickhouse/static/description/index.html
- auditlog_clickhouse/tests/__init__.py
- auditlog_clickhouse/tests/common.py
- auditlog_clickhouse/tests/test_auditlog_clickhouse.py
- auditlog_clickhouse/tests/test_clickhouse_config.py
- auditlog_clickhouse/views/auditlog_clickhouse_config_views.xml
- requirements.txt
Allow auditors to browse, search and filter audit logs in the standard Odoo Audit Log UI while keeping the data stored externally in ClickHouse. Configure Postgres FDW (pg_clickhouse) so auditlog models read from ClickHouse without changing existing auditlog views, and ensure queries (search/filters/group by) are forwarded through FDW. Keep the audit trail immutable by blocking UI create/write/unlink for auditlog records when ClickHouse/FDW mode is enabled. Task: 5248
Force-pushed from bd4cbcc to bf5c07f.
Actionable comments posted: 2
♻️ Duplicate comments (8)
auditlog_clickhouse/models/auditlog_clickhouse_config.py (3)
349-402: Quote ClickHouse identifiers in generated DDL.
`db_name` is interpolated raw into `CREATE TABLE` statements. Based on learnings, the proper approach for handling database and table identifiers in ClickHouse queries is to quote them (using backticks or double quotes) rather than validating with regex.
79-84: Validate `queue_batch_size` as strictly positive.
A zero or negative `queue_batch_size` will break SQL `LIMIT` semantics in the flush jobs. Add a SQL constraint or `@api.constrains` check.
184-228: Single-active invariant can be broken by multi-record create/write.
If multiple records are activated in one ORM call, `_deactivate_other_configs()` excludes all `self.ids`, allowing multiple active configs to persist.
auditlog_clickhouse/readme/CONFIGURE.md (1)
31-32: Missing configuration step: "Setup FDW read".
The PR's primary goal is enabling reads via pg_clickhouse. The form view exposes a "Setup FDW read" button, but CONFIGURE.md doesn't instruct users to click it after table creation.
auditlog_clickhouse/readme/DESCRIPTION.md (1)
1-1: Grammar: "transfers audit of logs" → "transfers of audit logs".
auditlog_clickhouse/views/auditlog_clickhouse_config_views.xml (1)
71-78: Misleading button label: "Configure action".
This button opens the flush cron form. The label should be more descriptive. The action name on line 6 was updated to "Configure flush action" but the button string on line 74 still says "Configure action".
✏️ Proposed fix
```diff
 <button
     name="%(action_configure_auditlog_clickhouse_flush_cron)d"
     type="action"
-    string="Configure action"
+    string="Configure flush cron"
     class="oe_link"
     icon="fa-arrow-right"
 />
```
auditlog_clickhouse/models/auditlog_log_buffer.py (2)
304-332: Unquoted ClickHouse identifiers in INSERT statements.
`config.database` is user-supplied and interpolated directly into the SQL string without identifier quoting. Table names should also be quoted for safety.
✏️ Proposed fix
```diff
+    @staticmethod
+    def _quote_ch_ident(name: str) -> str:
+        """Quote a ClickHouse identifier, escaping embedded backticks."""
+        return f"`{(name or '').replace('`', '``')}`"
+
     def _insert_rows_to_clickhouse(
         self, client, config, log_rows, line_rows, valid_buffers
     ):
         try:
+            db = self._quote_ch_ident(config.database)
             if log_rows:
                 client.execute(
-                    f"INSERT INTO {config.database}.auditlog_log ("
+                    f"INSERT INTO {db}.`auditlog_log` ("
                     f"{', '.join(self._CH_LOG_COLUMNS)}) VALUES",
                     log_rows,
                 )
             if line_rows:
                 client.execute(
-                    f"INSERT INTO {config.database}.auditlog_log_line ("
+                    f"INSERT INTO {db}.`auditlog_log_line` ("
                     f"{', '.join(self._CH_LINE_COLUMNS)}) VALUES",
                     line_rows,
                 )
```
Based on learnings, in the auditlog_clickhouse module, ClickHouse queries should quote database/table identifiers using backticks to handle special characters and reserved words correctly.
307-332: Partial INSERT success causes duplicate rows on retry.
If the first INSERT (`auditlog_log`) succeeds but the second (`auditlog_log_line`) fails, `RetryableJobError` retries the entire batch, re-inserting already-persisted log rows. Since `auditlog_log` likely uses a `MergeTree` engine, this produces silent duplicates. Consider one of:
- Wrap both INSERTs in a single transaction (if using ClickHouse's lightweight transactions).
- Use a `ReplacingMergeTree` engine so duplicate rows are deduplicated.
- Split the two INSERTs so that on retry, only the failed half is re-sent (track partial progress in the buffer state).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@auditlog_clickhouse/models/auditlog_clickhouse_config.py`:
- Around line 581-584: The comment inside the _create_foreign_tables method
contains an AI-generated artifact ":contentReference[oaicite:1]{index=1}";
remove that fragment so the comment reads cleanly (e.g., "pg_clickhouse foreign
table options: table_name, (optional) database") and keep the rest of the method
intact (refer to the _create_foreign_tables function and the db_opt local
variable to locate the code).
In `@auditlog_clickhouse/models/auditlog_readonly.py`:
- Around line 5-8: The function _is_clickhouse_readonly_mode performs a
redundant check for config.is_active even though get_active_config() already
filters by is_active=True; update the return expression to drop the unnecessary
config.is_active check and just return a boolean based on config existing and
the fdw_enabled flag (e.g., use bool(config and getattr(config, "fdw_enabled",
False))). Keep references to the same env lookup
(env["auditlog.clickhouse.config"].sudo().get_active_config()) and the config
variable so the change is localized to the return expression in
_is_clickhouse_readonly_mode.
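Stripped to its essence, the simplified predicate the comment asks for is just "config exists and FDW is enabled". Below is that return expression as a plain function over any config-like object; the `SimpleNamespace` stand-in replaces the real env lookup, which is Odoo-specific:

```python
from types import SimpleNamespace


def is_clickhouse_readonly_mode(config) -> bool:
    """Simplified predicate: get_active_config() already filters on
    is_active, so only existence and the fdw_enabled flag are checked.
    The real method would obtain ``config`` via
    env["auditlog.clickhouse.config"].sudo().get_active_config().
    """
    return bool(config and getattr(config, "fdw_enabled", False))
```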
📒 Files selected for processing (12)
- auditlog_clickhouse/README.rst
- auditlog_clickhouse/i18n/auditlog_clickhouse.pot
- auditlog_clickhouse/models/__init__.py
- auditlog_clickhouse/models/auditlog_clickhouse_config.py
- auditlog_clickhouse/models/auditlog_log_buffer.py
- auditlog_clickhouse/models/auditlog_readonly.py
- auditlog_clickhouse/models/auditlog_rule.py
- auditlog_clickhouse/readme/CONFIGURE.md
- auditlog_clickhouse/readme/DESCRIPTION.md
- auditlog_clickhouse/readme/USAGE.md
- auditlog_clickhouse/static/description/index.html
- auditlog_clickhouse/views/auditlog_clickhouse_config_views.xml
♻️ Duplicate comments (1)
auditlog_clickhouse/models/auditlog_clickhouse_config.py (1)
365-410: 🧹 Nitpick | 🔵 Trivial — Quote ClickHouse identifiers in DDL for robustness.
The `db_name` is interpolated raw into `CREATE TABLE` statements. While this is admin-configured (not user input), quoting identifiers prevents syntax errors from special characters or reserved words.
♻️ Proposed fix
```diff
 def _get_clickhouse_ddl(self) -> list[str]:
     ...
     self.ensure_one()
-    db_name = self.database
+    # Quote to handle special chars/reserved words safely
+    db_name = f"`{(self.database or '').replace('`', '``')}`"
     return [
         f"""
-        CREATE TABLE IF NOT EXISTS {db_name}.auditlog_log
+        CREATE TABLE IF NOT EXISTS {db_name}.`auditlog_log`
         ...
         """,
         f"""
-        CREATE TABLE IF NOT EXISTS {db_name}.auditlog_log_line
+        CREATE TABLE IF NOT EXISTS {db_name}.`auditlog_log_line`
         ...
         """,
     ]
```
Based on learnings: "In the auditlog_clickhouse module, when constructing ClickHouse queries, quote database/table identifiers (using backticks or double quotes) instead of relying on regex-based validation."
📒 Files selected for processing (8)
- auditlog_clickhouse/README.rst
- auditlog_clickhouse/__manifest__.py
- auditlog_clickhouse/models/auditlog_clickhouse_config.py
- auditlog_clickhouse/models/auditlog_readonly.py
- auditlog_clickhouse/readme/CONFIGURE.md
- auditlog_clickhouse/readme/DESCRIPTION.md
- auditlog_clickhouse/static/description/index.html
- auditlog_clickhouse/views/auditlog_clickhouse_config_views.xml
> - `INSERT` and `CREATE TABLE` privileges on the target database.
> - The `pg_clickhouse` extension installed on the PostgreSQL server.
>
> ClickHouse installation (Docker guide):
Please remove the section related to ClickHouse installation, as it falls outside the scope of this module.
New PR that closes this one - #4
Task: 5248
Summary by CodeRabbit: New Features, Documentation, Tests, Chores.